
Collaborating Authors

 Denver


Viral Ottawa Senators fan blamed for team's 0-2 playoff start banished to Taiwan

FOX News

When it comes to sports superstitions, they don't make them much more militant than teams and players in the Stanley Cup Playoffs. Everyone on the team has to grow playoff beards, and if the team ate at a certain restaurant then had a great game, guess where you're eating for the next two months during home games? Hell, even Sidney Crosby has been wearing the same jockstrap for 20 years because of how superstitious he is (okay, that was TMI, I apologize). Suffice it to say, teams can get a little paranoid when it comes to luck and bad omens in the playoffs, which is why the Ottawa Senators had to act accordingly after their team fell down 0-2 against the Carolina Hurricanes.


Nuggets vs Timberwolves Game 3 pick hinges on Jaden McDaniels calling out Denver's entire defense

FOX News

The Timberwolves team total is set at 116.5, and the play is the under after McDaniels' remarks. We had two NBA playoff games last night with the Pistons taking down the Magic (and me taking a loss on my play) thanks to an absolutely brutal third quarter from Orlando.


Ensemble-Based Dirichlet Modeling for Predictive Uncertainty and Selective Classification

Franzen, Courtney, Pourkamali-Anaraki, Farhad

arXiv.org Machine Learning

Neural network classifiers trained with cross-entropy loss achieve strong predictive accuracy but lack the capability to provide inherent predictive uncertainty estimates, thus requiring external techniques to obtain these estimates. In addition, softmax scores for the true class can vary substantially across independent training runs, which limits the reliability of uncertainty-based decisions in downstream tasks. Evidential Deep Learning aims to address these limitations by producing uncertainty estimates in a single pass, but evidential training is highly sensitive to design choices including loss formulation, prior regularization, and activation functions. Therefore, this work introduces an alternative Dirichlet parameter estimation strategy by applying a method of moments estimator to ensembles of softmax outputs, with an optional maximum-likelihood refinement step. This ensemble-based construction decouples uncertainty estimation from the fragile evidential loss design while also mitigating the variability of single-run cross-entropy training, producing explicit Dirichlet predictive distributions. Across multiple datasets, we show that the improved stability and predictive uncertainty behavior of these ensemble-derived Dirichlet estimates translate into stronger performance in downstream uncertainty-guided applications such as prediction confidence scoring and selective classification.
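As a rough sketch of the method-of-moments step the abstract describes, the snippet below fits a Dirichlet to an ensemble of softmax outputs using the relation Var[p_k] = m_k(1 - m_k)/(s + 1), where s is the total concentration. The function name and the averaging of per-class estimates are our own choices, and the paper's optional maximum-likelihood refinement step is omitted.

```python
import numpy as np

def dirichlet_mom(softmax_samples, eps=1e-8):
    """Method-of-moments Dirichlet fit to an ensemble of softmax vectors.

    softmax_samples: array of shape (n_members, n_classes), rows sum to 1.
    Returns the estimated concentration vector alpha of shape (n_classes,).
    """
    p = np.asarray(softmax_samples)
    m = p.mean(axis=0)                       # per-class sample mean
    v = p.var(axis=0, ddof=1) + eps          # per-class sample variance
    # For Dirichlet(alpha): Var[p_k] = m_k(1 - m_k) / (s + 1), s = sum(alpha).
    # Solve for s per class, then average the log-estimates for stability.
    s_k = m * (1.0 - m) / v - 1.0
    s = np.exp(np.log(np.clip(s_k, eps, None)).mean())
    return s * m

# Example: 10 ensemble members' softmax outputs for a 3-class problem
rng = np.random.default_rng(0)
samples = rng.dirichlet([8.0, 3.0, 1.0], size=10)
alpha = dirichlet_mom(samples)
print(alpha, alpha.sum())
```

The total concentration alpha.sum() then acts directly as a confidence score: a larger value means a sharper Dirichlet and lower predictive uncertainty, which is what drives the selective-classification experiments.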


Time-Warping Recurrent Neural Networks for Transfer Learning

Hirschi, Jonathon

arXiv.org Machine Learning

Dynamical systems describe how a physical system evolves over time, and physical processes can evolve faster or slower under different environmental conditions. We define time-warping as rescaling the time variable in a model of a physical system. This thesis proposes a new transfer-learning method for Recurrent Neural Networks (RNNs) based on time-warping. We prove that for a class of linear, first-order differential equations known as time-lag models, an LSTM can approximate these systems to any desired accuracy, and that the model can be time-warped while maintaining that approximation accuracy. The Time-Warping method of transfer learning is then evaluated on an applied problem: predicting fuel moisture content (FMC), an important quantity in wildfire modeling. An RNN with LSTM recurrent layers is pretrained on fuels with a characteristic time scale of 10 hours, for which large quantities of training data are available. The RNN is then modified with transfer learning to generate predictions for fuels with characteristic time scales of 1 hour, 100 hours, and 1000 hours. Evaluated against several established transfer-learning methods, the Time-Warping method produces predictions of comparable accuracy despite modifying only a small fraction of the parameters that the other methods modify.
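The thesis's exact LSTM construction is not reproduced here, but the sketch below illustrates the core idea of time-warping as rescaling a signal's time axis, assuming simple linear interpolation onto a warped time grid (the function and its parameters are illustrative, not taken from the thesis).

```python
import numpy as np

def time_warp(series, ratio):
    """Resample a 1-D series so its characteristic time scale changes by `ratio`.

    Illustratively, ratio = 10 stretches a fast (1-hour-fuel-like) series so it
    resembles the slower (10-hour) time scale a network was pretrained on.
    """
    n = len(series)
    src_t = np.arange(n)
    # Evaluate the original signal at t / ratio, clipped to the observed range.
    warp_t = np.clip(src_t / ratio, 0, n - 1)
    return np.interp(warp_t, src_t, series)

# A fast exponential decay (time constant ~5 steps) stretched by a factor of 10
fast = np.exp(-np.arange(100) / 5.0)
slow_view = time_warp(fast, ratio=10.0)   # behaves like time constant ~50
```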


Low-Rank Compression of Pretrained Models via Randomized Subspace Iteration

Pourkamali-Anaraki, Farhad

arXiv.org Machine Learning

The massive scale of pretrained models has made efficient compression essential for practical deployment. Low-rank decomposition based on the singular value decomposition (SVD) provides a principled approach for model reduction, but its exact computation is expensive for large weight matrices. Randomized alternatives such as randomized SVD (RSVD) improve efficiency, yet they can suffer from poor approximation quality when the singular value spectrum decays slowly, a regime commonly observed in modern pretrained models. In this work, we address this limitation from both theoretical and empirical perspectives. First, we establish a connection between low-rank approximation error and predictive performance by analyzing softmax perturbations, showing that deviations in class probabilities are controlled by the spectral error of the compressed weights. Second, we demonstrate that RSVD is inadequate, and we propose randomized subspace iteration (RSI) as a more effective alternative. By incorporating multiple power iterations, RSI improves spectral separation and provides a controllable mechanism for enhancing approximation quality. We evaluate our approach on both convolutional networks and transformer-based architectures. Our results show that RSI achieves near-optimal approximation quality while outperforming RSVD in predictive accuracy under aggressive compression, enabling efficient model compression.
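As a minimal sketch of randomized subspace iteration in the standard Halko-style form the abstract alludes to, the code below adds power iterations to a randomized range finder so the projection basis aligns better with the leading singular subspace when the spectrum decays slowly; the function name, oversampling, and iteration defaults are our choices, not necessarily the paper's.

```python
import numpy as np

def randomized_subspace_iteration(W, rank, n_iter=4, oversample=10, seed=0):
    """Low-rank factorization of W via randomized subspace (power) iteration.

    Returns (U, S, Vt) with (U * S) @ Vt ~= W, truncated to `rank`. Each power
    iteration sharpens spectral separation compared with plain randomized SVD.
    """
    rng = np.random.default_rng(seed)
    m, n = W.shape
    Q = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(W @ Q)               # initial range estimate
    for _ in range(n_iter):
        # Orthonormalize between applications of W.T and W for stability
        Z, _ = np.linalg.qr(W.T @ Q)
        Q, _ = np.linalg.qr(W @ Z)
    B = Q.T @ W                               # small (rank + oversample) x n
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], S[:rank], Vt[:rank]

# Compress a weight matrix and check the relative reconstruction error
W = np.random.default_rng(1).standard_normal((512, 512))
U, S, Vt = randomized_subspace_iteration(W, rank=64)
W_hat = (U * S) @ Vt
print(np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Setting n_iter=0 recovers plain RSVD, which makes the iteration count the "controllable mechanism" for trading compute against approximation quality.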


Auto-differentiable data assimilation: Co-learning of states, dynamics, and filtering algorithms

Adrian, Melissa, Sanz-Alonso, Daniel, Willett, Rebecca

arXiv.org Machine Learning

Data assimilation algorithms estimate the state of a dynamical system from partial observations, where the successful performance of these algorithms hinges on costly parameter tuning and on employing an accurate model for the dynamics. This paper introduces a framework for jointly learning the state, dynamics, and parameters of filtering algorithms in data assimilation through a process we refer to as auto-differentiable filtering. The framework leverages a theoretically motivated loss function that enables learning from partial, noisy observations via gradient-based optimization using auto-differentiation. We further demonstrate how several well-known data assimilation methods can be learned or tuned within this framework. To underscore the versatility of auto-differentiable filtering, we perform experiments on dynamical systems spanning multiple scientific domains, such as the Clohessy-Wiltshire equations from aerospace engineering, the Lorenz-96 system from atmospheric science, and the generalized Lotka-Volterra equations from systems biology. Finally, we provide guidelines for practitioners to customize our framework according to their observation model, accuracy requirements, and computational budget.
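The paper's framework and loss function are not reproduced here; as an illustration of the underlying idea of auto-differentiable filtering, the sketch below runs a scalar Kalman filter in PyTorch and learns the dynamics coefficient by differentiating a one-step-ahead predictive loss through the filter (the model, loss, and parameter choices are ours, for illustration only).

```python
import torch

def kalman_filter_loss(y, a, q, r):
    """One-step-ahead negative log-likelihood of a scalar Kalman filter.

    y: observations (T,); a: dynamics coefficient; q, r: process/observation
    noise variances. Built from torch ops, so the loss is differentiable in a.
    """
    m = torch.zeros(())                      # filtering mean
    p = torch.ones(())                       # filtering variance
    loss = torch.zeros(())
    for yt in y:
        m_pred = a * m                       # predict
        p_pred = a * a * p + q
        s = p_pred + r                       # innovation variance
        loss = loss + 0.5 * (torch.log(s) + (yt - m_pred) ** 2 / s)
        k = p_pred / s                       # Kalman gain
        m = m_pred + k * (yt - m_pred)       # update
        p = (1 - k) * p_pred
    return loss

# Simulate data from x_{t+1} = 0.9 x_t + noise, observed with noise
torch.manual_seed(0)
x, ys = torch.zeros(()), []
for _ in range(200):
    x = 0.9 * x + 0.1 * torch.randn(())
    ys.append(x + 0.3 * torch.randn(()))
y = torch.stack(ys)

# Learn the dynamics coefficient by gradient descent through the filter
a = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([a], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss = kalman_filter_loss(y, a, torch.tensor(0.01), torch.tensor(0.09))
    loss.backward()
    opt.step()
print(a.item())   # should move toward the true value 0.9
```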